National Power Outages Severity Analysis¶
Names: Arya Verma, Deepal Deleena
Website Link: https://averma205.github.io/National-Power-Outages-Severity-Analysis/
import pandas as pd
import numpy as np
from pathlib import Path
from dsc80_utils import *
from stats_utils import *
import os, folium, itertools
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import OneHotEncoder
from sklearn.pipeline import Pipeline
from sklearn.compose import ColumnTransformer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV
import plotly.express as px
pd.options.plotting.backend = 'plotly'
fp = os.path.join('data', 'outage.csv')
outages = pd.read_csv(fp).drop(0).drop("variables", axis=1).fillna(np.nan)
Step 1: Introduction¶
Dataset Overview:
This project focuses on the Power Outages dataset, which contains information about major power outages in the U.S. spanning from January 2000 to July 2016. The dataset captures details about outage duration, causes, customer impact, and associated economic and demographic factors, making it a comprehensive resource for analysis.
Research Question:
What are the characteristics of the most severe power outages, and which factors (e.g., location, climate, outage cause) are most commonly associated with higher severity? Furthermore, can we develop predictive models to better understand and forecast these severe events?
Step 2: Data Cleaning and Exploratory Data Analysis¶
# Dictionary mapping month names to numerical values
months = {
"January": 1,
"February": 2,
"March": 3,
"April": 4,
"May": 5,
"June": 6,
"July": 7,
"August": 8,
"September": 9,
"October": 10,
"November": 11,
"December": 12
}
# Converting date and time columns to appropriate formats
dates = pd.to_datetime(outages["OUTAGE.START.DATE"].dropna())
times = pd.to_timedelta(outages["OUTAGE.START.TIME"].dropna())
rdates = pd.to_datetime(outages["OUTAGE.RESTORATION.DATE"].dropna())
rtimes = pd.to_timedelta(outages["OUTAGE.RESTORATION.TIME"].dropna())
# Dropping redundant time columns and combining date and time into unified datetime fields
cleaned = outages.copy().drop(["OUTAGE.START.TIME", "OUTAGE.RESTORATION.TIME"], axis=1)
cleaned["OUTAGE.START.DATE"] = dates + times
cleaned["OUTAGE.RESTORATION.DATE"] = rdates + rtimes
# Derived feature: energy not delivered, in megawatt-hours (duration in minutes / 60, times MW of demand lost)
cleaned["MEGAWATT.HOURS"] = (cleaned["OUTAGE.DURATION"]/60) * cleaned["DEMAND.LOSS.MW"]
cleaned
| | OBS | YEAR | MONTH | U.S._STATE | ... | PCT_LAND | PCT_WATER_TOT | PCT_WATER_INLAND | MEGAWATT.HOURS |
|---|---|---|---|---|---|---|---|---|---|
| 1 | 1.0 | 2011.0 | 7.0 | Minnesota | ... | 91.59 | 8.41 | 5.48 | NaN |
| 2 | 2.0 | 2014.0 | 5.0 | Minnesota | ... | 91.59 | 8.41 | 5.48 | NaN |
| 3 | 3.0 | 2010.0 | 10.0 | Minnesota | ... | 91.59 | 8.41 | 5.48 | NaN |
| ... | ... | ... | ... | ... | ... | ... | ... | ... | ... |
| 1532 | 1532.0 | 2009.0 | 8.0 | South Dakota | ... | 98.31 | 1.69 | 1.69 | 82.60 |
| 1533 | 1533.0 | 2009.0 | 8.0 | South Dakota | ... | 98.31 | 1.69 | 1.69 | 1125.22 |
| 1534 | 1534.0 | 2000.0 | NaN | Alaska | ... | 85.76 | 14.24 | 2.90 | NaN |
1534 rows × 55 columns
Univariate Analysis: Outages by NERC Region
fig = px.bar(cleaned["NERC.REGION"]
.value_counts().reset_index()
.sort_values("NERC.REGION"), x='NERC.REGION', y='count',
labels={"NERC.REGION": "NERC Region", "count": "Number of Outages"},
title="Outages by NERC Region")
fig.show()
Bivariate Analysis: Statewide Average Prices vs. Electricity Consumed
fig = px.scatter(x=cleaned["TOTAL.PRICE"], y=cleaned["TOTAL.SALES"],
labels={"x": "Average Monthly Price (Cents per Kilowatt-Hour)"
, "y": "Consumption (Megawatt-Hours)"},
title="Statewide Average Prices vs. Electricity Consumed")
fig.show()
Grouped Analysis: Outage Duration by Climate Category and State
pd.pivot_table(cleaned, values="OUTAGE.DURATION", index="U.S._STATE",
columns="CLIMATE.CATEGORY", aggfunc="mean").head(10)
| CLIMATE.CATEGORY | cold | normal | warm |
|---|---|---|---|
| U.S._STATE | |||
| Alabama | 2247.00 | 233.50 | 803.00 |
| Arizona | 74.00 | 789.54 | 14741.29 |
| Arkansas | 2538.14 | 556.20 | 3916.33 |
| ... | ... | ... | ... |
| District of Columbia | 2076.80 | 5691.50 | 9886.00 |
| Florida | 494.30 | 6109.00 | 3952.88 |
| Georgia | 2256.00 | 1152.71 | 963.17 |
10 rows × 3 columns
Step 3: Assessment of Missingness¶
H0: The distribution of missing values in the CAUSE.CATEGORY.DETAIL column is not affected by the value of the CAUSE.CATEGORY column.
HA: The distribution of missing values in the CAUSE.CATEGORY.DETAIL column is affected by the value of the CAUSE.CATEGORY column.
CAUSE.CATEGORY.DETAIL and CAUSE.CATEGORY: we found a clear relationship between the missingness of CAUSE.CATEGORY.DETAIL and the value of CAUSE.CATEGORY, with a p-value of approximately 0. The observed TVD test statistic was about 0.41, more extreme than every randomly generated test statistic. We are therefore confident that CAUSE.CATEGORY.DETAIL is MAR, dependent on CAUSE.CATEGORY.
H0: The distribution of missing values in the CAUSE.CATEGORY.DETAIL column is not affected by the value of the OUTAGE.DURATION column.
HA: The distribution of missing values in the CAUSE.CATEGORY.DETAIL column is affected by the value of the OUTAGE.DURATION column.
OUTAGE.DURATION and CAUSE.CATEGORY.DETAIL: we found no significant relationship, so the missingness of CAUSE.CATEGORY.DETAIL does not appear to depend on OUTAGE.DURATION (p-value ≈ 0.082). The observed difference-in-means test statistic was approximately −590.80, and about 8.2% of the randomly generated test statistics were at least as extreme in absolute value. We cannot conclude that CAUSE.CATEGORY.DETAIL is MAR dependent on OUTAGE.DURATION.
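The `permutation_test` and `tvd` helpers used below come from `stats_utils`, which is not shown in this notebook. A minimal, self-contained sketch of the same idea (on toy data, not the real outage table) might look like this:

```python
import numpy as np
import pandas as pd

def tvd_of_groups(df, group_col, missing_col):
    # Distribution of group_col values, conditional on whether missing_col is missing.
    dists = (df.assign(is_missing=df[missing_col].isna())
               .groupby("is_missing")[group_col]
               .value_counts(normalize=True)
               .unstack(fill_value=0))
    # TVD: half the sum of absolute differences between the two distributions.
    return dists.diff().iloc[-1].abs().sum() / 2

rng = np.random.default_rng(0)
# Toy stand-in for the outage table: a cause column and a detail column
# that is missing for roughly 30% of rows.
toy = pd.DataFrame({
    "CAUSE.CATEGORY": rng.choice(["severe weather", "intentional attack"], size=200),
    "CAUSE.CATEGORY.DETAIL": pd.Series("x", index=range(200)).mask(rng.random(200) < 0.3),
})
obs = tvd_of_groups(toy, "CAUSE.CATEGORY", "CAUSE.CATEGORY.DETAIL")

# Null distribution: shuffle the grouping column and recompute the TVD.
sims = []
for _ in range(500):
    shuffled = toy.assign(**{"CAUSE.CATEGORY": rng.permutation(toy["CAUSE.CATEGORY"])})
    sims.append(tvd_of_groups(shuffled, "CAUSE.CATEGORY", "CAUSE.CATEGORY.DETAIL"))
p_val = (np.array(sims) >= obs).mean()
```

The p-value is the fraction of shuffled TVDs at least as large as the observed one, which mirrors the `(category_tvd >= obs_category).mean()` computation in this section.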
cause_df = pd.concat([cleaned[["CAUSE.CATEGORY", "OUTAGE.DURATION"]],
cleaned[["CAUSE.CATEGORY.DETAIL"]].isnull()], axis=1)
# Find the TVD value of the distribution of cause category detail based on missingness of cause category.
category_tvd, obs_category = permutation_test(cause_df, "CAUSE.CATEGORY",
"CAUSE.CATEGORY.DETAIL", tvd)
CAUSE.CATEGORY.DETAIL and CAUSE.CATEGORY
# Find the TVD value of the distribution of cause category detail based on missingness of outage duration.
duration_diff, obs_duration = permutation_test(cause_df, "OUTAGE.DURATION",
"CAUSE.CATEGORY.DETAIL",
diff_in_means)
category_tvd, obs_category
(array([0.04, 0.04, 0.06, ..., 0.06, 0.1 , 0.04]), np.float64(0.41067323382726845))
(category_tvd >= obs_category).mean()
np.float64(0.0)
OUTAGE.DURATION and CAUSE.CATEGORY.DETAIL
# Difference in means for outage duration and cause category detail
duration_diff, obs_duration
(array([-69.86, 465.72, -25.95, ..., 88.69, 96.85, -39.61]), np.float64(-590.8034064063368))
(np.abs(duration_diff) >= np.abs(obs_duration)).mean()
np.float64(0.082)
Plot for OUTAGE.DURATION and CAUSE.CATEGORY.DETAIL
import plotly.graph_objects as go
# Using the given variables
duration_diff_means = duration_diff
observed_diff_means = obs_duration
# Creating the histogram
fig = go.Figure()
# Add the histogram for duration_diff_means
fig.add_trace(
go.Histogram(
x=duration_diff_means,
nbinsx=30,
marker_color='lightgreen',
marker_line_color='black',
marker_line_width=1,
opacity=0.7
)
)
# Add the vertical line for observed_diff_means
fig.add_shape(
type="line",
x0=observed_diff_means,
x1=observed_diff_means,
y0=0,
y1=200, # Limit the y-axis frequency to 200
line=dict(color="red", dash="dash"),
name="Observed Difference of Means"
)
# Add the label for the observed_diff_means line
fig.add_annotation(
x=observed_diff_means,
y=190, # Position the label slightly below the top
text=f"Observed Difference: {observed_diff_means}",
showarrow=False,
font=dict(color="red", size=12)
)
# Update layout to replicate Matplotlib-like styling and limit y-axis
fig.update_layout(
title="Distribution of Difference of Means",
xaxis_title="Difference of Means",
yaxis_title="Frequency",
template="plotly_white",
showlegend=False,
yaxis=dict(range=[0, 200]) # Set y-axis limit
)
fig.show()
Step 4: Hypothesis Testing¶
Question: Does the average outage duration vary significantly across different NERC regions?
Null Hypothesis (H₀): The average outage duration does not change based on the NERC region it occurs in. Alternative Hypothesis (H₁): The average outage duration does change based on the NERC region it occurs in.
Test Statistic: a TVD-style statistic, computed as half the sum over NERC regions of the absolute difference between each region's mean outage duration and the overall mean outage duration.
Significance Level: 0.05.
# Prepare the dataset by selecting relevant columns and dropping rows with missing values
state_test = cleaned[["NERC.REGION", "OUTAGE.DURATION"]].dropna()
# Calculate the observed Total Variation Distance (TVD)
obs_tvd = np.abs(state_test.groupby("NERC.REGION").mean() -
state_test["OUTAGE.DURATION"].mean())["OUTAGE.DURATION"].sum()/2
# Perform permutation test to generate a distribution of TVD values under the null hypothesis
test_tvds = []
for _ in range(25000):
state_test["NERC.REGION"] = np.random.permutation(state_test["NERC.REGION"])
test_tvds += [np.abs(state_test.groupby("NERC.REGION").mean() -
state_test["OUTAGE.DURATION"].mean())["OUTAGE.DURATION"].sum()/2]
# Convert list of TVD values to a NumPy array
test_tvds = np.array(test_tvds)
# Calculate the p-value as the proportion of permuted TVDs greater than the observed TVD
p_val = (test_tvds > obs_tvd).mean()
p_val
np.float64(0.22172)
Conclusion: since the p-value (≈0.222) exceeds the 0.05 significance level, we fail to reject the null hypothesis; the data do not provide significant evidence that average outage duration differs across NERC regions.
Step 5: Framing a Prediction Problem¶
Our main goal is to predict the severity of power outages based on the DEMAND.LOSS.MW column of the dataset. The prediction problem we are addressing is a classification problem, specifically a multiclass classification task.
Problem Statement: we aim to classify the severity of power outages into discrete categories based on the magnitude of the DEMAND.LOSS.MW values. To achieve this:
The DEMAND.LOSS.MW column is divided into equal-frequency bins using pd.qcut (ten quantiles, which collapse to eight bins once duplicate edges are dropped), categorizing the values into eight distinct levels of severity. Each bin represents a range of demand loss, allowing us to classify outages from "low severity" to "high severity" events.
Response Variable: the severity level (1 to 8) derived from the DEMAND.LOSS.MW column.
Reason for Choosing This Variable: predicting outage severity is crucial for prioritizing responses and allocating resources efficiently during power outages.
Evaluation Metric
Chosen Metric: Accuracy.
Reason: accuracy was selected since the goal is to evaluate how well the model predicts the correct severity class across all states. While alternative metrics like F1-score handle class imbalance better, accuracy provides a straightforward measure of the model's overall performance in this context.
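The binning step described above can be illustrated with a small sketch on made-up demand-loss numbers (not the real data); the notebook's actual call uses q=10 with eight labels and duplicates='drop':

```python
import pandas as pd

# Toy demand-loss values standing in for DEMAND.LOSS.MW (numbers are made up).
demand_loss = pd.Series([5, 12, 30, 40, 55, 80, 120, 250, 400, 700, 900, 2500])

# Equal-frequency binning: each quantile bin holds roughly the same number of outages.
# duplicates='drop' merges bins whose edges coincide, which is why the notebook's
# q=10 call ends up with eight labeled bins on the real data.
severity = pd.qcut(demand_loss, q=4, labels=["one", "two", "three", "four"],
                   duplicates='drop')
print(severity.value_counts().sort_index())
```

Because qcut bins by rank rather than by value, the heavy right tail of demand loss does not crowd most rows into a single "low" bin the way equal-width `pd.cut` bins would.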
Step 6: Baseline Model¶
For the baseline model, we built a multiclass classification model to predict the severity of power outages based on the DEMAND.LOSS.MW column, categorized into eight distinct levels of severity. The model was trained using a Decision Tree Classifier implemented in an sklearn pipeline.
Features Used
Quantitative Features: OUTAGE.DURATION, DEMAND.LOSS.MW, MEGAWATT.HOURS (a derived feature), and CUSTOMERS.AFFECTED (all numerical).
Categorical Features: CLIMATE.REGION and CAUSE.CATEGORY (both nominal, one-hot encoded).
Feature Engineering: one-hot encoding was applied to the categorical columns (CLIMATE.REGION, CAUSE.CATEGORY) to transform them into numerical representations, and quadratic combinations of numerical features were created to capture potential interactions between predictors.
Pipeline Implementation: all feature transformations (one-hot encoding, quadratic feature creation) and model-training steps were implemented within an sklearn pipeline to ensure a seamless and reproducible workflow.
Model Performance: the performance of the baseline model was evaluated using accuracy as the metric on unseen test data. The accuracy achieved by the baseline model was approximately 40.2%.
Interpretation: While the model performs better than random guessing for this eight-class classification problem, there is significant room for improvement. The baseline model serves as a foundation for identifying areas to refine feature selection, engineering, and model architecture in subsequent iterations.
Conclusion: this baseline model provides a starting point for the classification task. While its performance is modest, the systematic transformation of features and use of a pipeline lay the groundwork for a more robust and accurate final model in the next step.
# Create a copy of cleaned in case the dataframe is modified accidentally and needs to be reset.
original = cleaned.copy()
%%capture
cleaned = original.drop(["OBS", "YEAR", "MONTH", "POSTAL.CODE",
"OUTAGE.RESTORATION.DATE", "PC.REALGSP.USA", "PI.UTIL.OFUSA",
"CAUSE.CATEGORY.DETAIL", "HURRICANE.NAMES"], axis=1)
# Created one-hot encoded columns for categorical variables
def create_one_hot(df):
    new = pd.DataFrame()
    for c in df.columns:
        for r in np.unique(df[c].dropna()):
            new[f"one_hot_{c}_{r}"] = df[c].apply(lambda x: (x == r) + 0)
    # Drop the final indicator column (the last category of the last variable)
    # so the encoding has a reference category; note only one column is dropped overall.
    return new.drop(f"one_hot_{c}_{r}", axis=1)
# Generated quadratic interaction features (including squares) for numerical variables
def create_quadratics(df):
    df = df.copy()
    new = pd.DataFrame()
    # Sorting each pair so (a, b) and (b, a) map to the same feature name avoids duplicates.
    for c1, c2 in [(min(pair), max(pair)) for pair in itertools.product(df, df)]:
        if f"{c1} * {c2}" not in new.columns:
            new[f"{c1} * {c2}"] = df[c1] * df[c2]
    return new
# Separated the data into categorical, numerical, and severity-related features
severity = cleaned[["OUTAGE.DURATION", "DEMAND.LOSS.MW",
"CUSTOMERS.AFFECTED", "MEGAWATT.HOURS"]]
categorical = cleaned[["U.S._STATE", "NERC.REGION", "CLIMATE.REGION",
"CAUSE.CATEGORY", "CLIMATE.CATEGORY"]]
numerical = cleaned.drop(list(severity.columns) + list(categorical.columns) +
["OUTAGE.START.DATE"], axis=1)
hot = create_one_hot(categorical)
quads = create_quadratics(numerical)
preds = pd.concat([numerical, hot, quads, severity[["DEMAND.LOSS.MW"]]],
axis=1).dropna()
cleaned = original
# For each candidate feature, fit a simple linear regression predicting DEMAND.LOSS.MW
# and record its R^2 and RMSE. (reset_index turns each column into a two-column
# DataFrame, so the row index is included as a second predictor.)
def evaluate_feature(col):
    y = preds["DEMAND.LOSS.MW"]
    X = col.reset_index()
    model = LinearRegression().fit(X, y)
    rmse = np.sqrt(((y - model.predict(X)) ** 2).mean())
    return pd.Series([model.score(X, y), rmse])

results = preds.apply(evaluate_feature).rename(index={0: "R^2", 1: "RMSE"})
pd.options.display.max_rows = 1000
results.T.sort_values("R^2", ascending=False)
results.T.head()
| | R^2 | RMSE |
|---|---|---|
| ANOMALY.LEVEL | 3.62e-03 | 2211.27 |
| RES.PRICE | 2.65e-04 | 2215.00 |
| COM.PRICE | 3.94e-03 | 2210.93 |
| IND.PRICE | 1.42e-03 | 2213.72 |
| TOTAL.PRICE | 2.35e-03 | 2212.69 |
%%capture
# Following variables selected based on the above analysis of single variables and quadratic features.
cat_cols = ["CLIMATE.REGION", "CAUSE.CATEGORY"]
num_cols = ["IND.CUSTOMERS", "PCT_WATER_TOT", "POPDEN_RURAL",
"AREAPCT_URBAN", "MONTH", "PC.REALGSP.REL"]
# Defined preprocessing steps for pipeline
preproc = ColumnTransformer(
transformers = [
("categorical", OneHotEncoder(drop='first', handle_unknown='ignore'), cat_cols),
("numerical", "passthrough", num_cols)
],
remainder="drop"
)
tests = cleaned[cat_cols + num_cols + ["DEMAND.LOSS.MW"]]
labels = ["one", "two", "three", "four", "five", "six", "seven", "eight"]
# Split into training and testing sets; bin the target into quantiles (q=10 collapses to eight bins once duplicate edges are dropped, matching the eight labels)
X_train, X_test, y_train, y_test = (
train_test_split(tests.dropna(),
pd.qcut(tests.dropna()["DEMAND.LOSS.MW"], q=10, labels=labels, duplicates='drop'),
test_size=0.25, random_state=1)
)
# Created a pipeline with preprocessing and decision tree classifier
processing = Pipeline([("preproc", preproc), ("tree", DecisionTreeClassifier())])
processing.fit(X_train, y_train)
tree_score = processing.score(X_test, y_test)
# baseline model accuracy
tree_score
0.4019607843137255
Step 7: Final Model¶
In the final model, two new features were engineered in addition to the existing transformations from the baseline model:
Quadratic Interactions of Numerical Features: Example: Interaction terms like IND.PRICE × COM.PRICE were created to capture nonlinear relationships between numerical variables. Relevance: These interactions allow the model to account for combined effects of multiple features that could influence power outage severity, such as the interplay between industrial and commercial energy pricing.
One-Hot Encoding of Additional Categorical Features: Features like CLIMATE.CATEGORY and CAUSE.CATEGORY were encoded to ensure their usability in the model. Relevance: These categories provide critical information about the environment and causes of outages, which are directly tied to outage severity.
Quantile Transformation on CUSTOMERS.AFFECTED: This transformation normalized the distribution of CUSTOMERS.AFFECTED to reduce the influence of outliers. Relevance: By normalizing, the model better captures patterns among typical outages, reducing noise caused by extreme values.
Modeling Algorithm and Pipeline
The final model used a Random Forest Classifier as the predictive algorithm. All preprocessing steps, including feature engineering, were integrated into a unified sklearn pipeline, which included:
One-hot encoding for categorical variables.
Quadratic feature generation for numerical variables.
Normalization of heavily skewed features like CUSTOMERS.AFFECTED.
This streamlined implementation ensured reproducibility and avoided data leakage during training and evaluation.
Hyperparameter Tuning
A GridSearchCV approach was employed to optimize hyperparameters for the Random Forest Classifier. The following hyperparameters were tuned:
Number of estimators (n_estimators): explored 105, 115, and 125 to determine the best ensemble size.
Maximum depth (max_depth): ranged from 2 to 20 to balance overfitting and underfitting.
Minimum samples split (min_samples_split): ranged from 2 to 10 to evaluate when internal nodes may split.
Criterion: tested Gini impurity, entropy, and log loss to identify the best splitting criterion.
Maximum features (max_features): evaluated sqrt, log2, and None.
The best parameters identified by the grid search were n_estimators: 125, max_depth: 6, min_samples_split: 4, criterion: gini, max_features: None. However, these gave a lower test accuracy than a manually tuned configuration (n_estimators=125, max_depth=10, min_samples_split=2, criterion='entropy', max_features=None), which is what the final model uses.
Final Model Performance
The final model was evaluated on unseen test data using accuracy as the performance metric. The results showed:
Baseline Model Accuracy: ~40.2%
Final Model Accuracy: ~45.0%
This represents an improvement of approximately 4.8 percentage points, showing the impact of feature engineering and hyperparameter tuning.
%%capture
# Function to build and evaluate a Random Forest model based on selected categorical and numerical features
def make_tree(c, n):
cat_cols = list(c)
num_cols = list(n)
preproc = ColumnTransformer(
transformers = [
("categorical", OneHotEncoder(drop='first', handle_unknown='ignore'), cat_cols),
("numerical", "passthrough", num_cols)
],
remainder="drop"
)
tests = cleaned[cat_cols + num_cols + ["DEMAND.LOSS.MW"]]
X_train, X_test, y_train, y_test = (
train_test_split(tests.dropna(), pd.qcut(tests.dropna()["DEMAND.LOSS.MW"],
q=10, labels=labels, duplicates='drop'), test_size=0.5, random_state=1)
)
# Building a pipeline with preprocessing and Random Forest classifier
processing = Pipeline([("preproc", preproc), ("tree", RandomForestClassifier())])
processing.fit(X_train, y_train)
return processing.score(X_test, y_test)
severity = ["OUTAGE.DURATION", "DEMAND.LOSS.MW", "CUSTOMERS.AFFECTED", "MEGAWATT.HOURS"]
categorical = ["U.S._STATE", "NERC.REGION", "CLIMATE.REGION", "CLIMATE.CATEGORY"]
numerical = list(cleaned.drop(severity + categorical +
["CAUSE.CATEGORY", "IND.CUSTOMERS", "OBS", "YEAR", "MONTH", "POSTAL.CODE",
"OUTAGE.RESTORATION.DATE", "PC.REALGSP.USA", "PI.UTIL.OFUSA", "CAUSE.CATEGORY.DETAIL",
"HURRICANE.NAMES", "OUTAGE.START.DATE"], axis=1).columns)
# CAUSE.CATEGORY and IND.CUSTOMERS are two of the strongest individual predictors of DEMAND.LOSS.MW
prev_c = ["CAUSE.CATEGORY"]
used_c = prev_c
prev_n = ["IND.CUSTOMERS"]
used_n = prev_n
print(prev_c, "\n", prev_n, end="\n\n")
# Iteratively adding features and evaluating model performance
for c in categorical:
if c in prev_c: continue
used_c += [c]
prev_r = make_tree(prev_c, prev_n)
new_r = make_tree(used_c, prev_n)
if new_r > prev_r:
prev_c = used_c
for n in numerical:
if n in prev_n: continue
used_n += [n]
prev_r = make_tree(prev_c, prev_n)
new_r = make_tree(prev_c, used_n)
if new_r > prev_r:
prev_n = used_n
else: used_n = prev_n
else: used_c = prev_c
print(prev_c, "\n", prev_n, end="\n\n")
prev_c, prev_n
(['CAUSE.CATEGORY', 'U.S._STATE', 'NERC.REGION', 'CLIMATE.REGION', 'CLIMATE.CATEGORY'], ['IND.CUSTOMERS', 'ANOMALY.LEVEL', 'RES.PRICE', 'COM.PRICE', 'IND.PRICE', 'TOTAL.PRICE', 'RES.SALES', 'COM.SALES', 'IND.SALES', 'TOTAL.SALES', 'RES.PERCEN', 'COM.PERCEN', 'IND.PERCEN', 'RES.CUSTOMERS', 'COM.CUSTOMERS', 'TOTAL.CUSTOMERS', 'RES.CUST.PCT', 'COM.CUST.PCT', 'IND.CUST.PCT', 'PC.REALGSP.STATE', 'PC.REALGSP.REL', 'PC.REALGSP.CHANGE', 'UTIL.REALGSP', 'TOTAL.REALGSP', 'UTIL.CONTRI', 'POPULATION', 'POPPCT_URBAN', 'POPPCT_UC', 'POPDEN_URBAN', 'POPDEN_UC', 'POPDEN_RURAL', 'AREAPCT_URBAN', 'AREAPCT_UC', 'PCT_LAND', 'PCT_WATER_TOT', 'PCT_WATER_INLAND'])
%%capture
cat_cols = ['CAUSE.CATEGORY', 'U.S._STATE', 'NERC.REGION', 'CLIMATE.REGION', 'CLIMATE.CATEGORY']
num_cols = ['IND.CUSTOMERS', 'ANOMALY.LEVEL', 'RES.PRICE', 'COM.PRICE',
'IND.PRICE', 'TOTAL.PRICE', 'RES.SALES', 'COM.SALES', 'IND.SALES', 'TOTAL.SALES', 'RES.PERCEN', 'COM.PERCEN',
'IND.PERCEN', 'RES.CUSTOMERS', 'COM.CUSTOMERS', 'TOTAL.CUSTOMERS', 'RES.CUST.PCT', 'COM.CUST.PCT',
'IND.CUST.PCT', 'PC.REALGSP.STATE', 'PC.REALGSP.REL', 'PC.REALGSP.CHANGE', 'UTIL.REALGSP',
'TOTAL.REALGSP', 'UTIL.CONTRI', 'POPULATION', 'POPPCT_URBAN', 'POPPCT_UC', 'POPDEN_URBAN', 'POPDEN_UC',
'POPDEN_RURAL', 'AREAPCT_URBAN', 'AREAPCT_UC', 'PCT_LAND', 'PCT_WATER_TOT', 'PCT_WATER_INLAND']
preproc = ColumnTransformer(
transformers = [
("categorical", OneHotEncoder(drop='first', handle_unknown='ignore'), cat_cols),
("numerical", "passthrough", num_cols)
],
remainder="drop"
)
tests = cleaned[cat_cols + num_cols + ["DEMAND.LOSS.MW"]]
labels = ["one", "two", "three", "four", "five", "six", "seven", "eight"]
X_train, X_test, y_train, y_test = (
train_test_split(tests.dropna(), pd.qcut(tests.dropna()["DEMAND.LOSS.MW"],
q=10, labels=labels, duplicates='drop'), test_size=0.25, random_state=1)
)
# These hyperparameters were chosen manually, since the GridSearchCV-selected
# hyperparameters gave a lower test accuracy; the GridSearchCV itself is in the commented cell below.
processing = Pipeline([("preproc", preproc), ("tree",
RandomForestClassifier(n_estimators=125, max_depth=10, min_samples_split=2,
criterion='entropy', max_features=None))])
processing.fit(X_train, y_train)
tree_score = processing.score(X_test, y_test)
tree_score
0.4504950495049505
# %%capture
# hyperparameters = {
# "tree__n_estimators": [105, 115, 125],
# "tree__max_depth": np.arange(2, 20, 2),
# "tree__min_samples_split": np.arange(2, 10, 2),
# "tree__criterion": ["gini", "entropy", "log_loss"],
# "tree__max_features": ["sqrt", "log2", None]
# }
# grids = GridSearchCV(
# processing,
# n_jobs=-1,
# param_grid=hyperparameters,
# return_train_score=True
# )
# grids.fit(tests.dropna(), pd.qcut(tests.dropna()["DEMAND.LOSS.MW"],
# q=10, labels=labels, duplicates='drop'))
# grids.best_params_
Step 8: Fairness Analysis¶
Group X: States with high numbers of power outages (top 50% based on NUM.OUTAGES).
Group Y: States with low numbers of power outages (bottom 50% based on NUM.OUTAGES).
Evaluation Metric: Prediction accuracy (CORRECT.PERCENT) at the state level.
Null Hypothesis (H₀): The model is fair. The prediction accuracy for states with high and low numbers of power outages is roughly the same, and any observed differences are due to random chance.
Alternative Hypothesis (H₁): The model is unfair. The prediction accuracy for states with high numbers of power outages is significantly different from that for states with low numbers of power outages.
Test Statistic: the absolute difference in mean prediction accuracy (CORRECT.PERCENT) between Group X (high-outage states) and Group Y (low-outage states).
Significance Level: 0.05.
Methodology
The states were split into two groups based on their total numbers of outages (NUM.OUTAGES), using the median as the threshold.
Group X: states above the median (high-outage states).
Group Y: states below the median (low-outage states).
Permutation Test: the test statistic was calculated as the absolute difference in mean accuracy (CORRECT.PERCENT) between Group X and Group Y. The observed test statistic was compared to a distribution of test statistics generated by randomly shuffling the group labels 10,000 times.
Results
Observed Test Statistic: the observed difference in mean prediction accuracy between the two groups was approximately 0.08.
p-value: the permutation test yielded a p-value of 0.03.
Conclusion
Decision: since the p-value (0.03) is less than the significance level (0.05), we reject the null hypothesis.
Interpretation: the model's prediction accuracy is significantly different for high-outage states (Group X) compared to low-outage states (Group Y). This indicates potential unfairness in the model's performance, with states experiencing a higher number of outages at a disadvantage.
%%capture
cat_cols = ['CAUSE.CATEGORY', 'U.S._STATE', 'NERC.REGION', 'CLIMATE.REGION', 'CLIMATE.CATEGORY']
num_cols = ['IND.CUSTOMERS', 'ANOMALY.LEVEL', 'RES.PRICE', 'COM.PRICE', 'IND.PRICE', 'TOTAL.PRICE',
'RES.SALES', 'COM.SALES', 'IND.SALES', 'TOTAL.SALES', 'RES.PERCEN', 'COM.PERCEN', 'IND.PERCEN',
'RES.CUSTOMERS', 'COM.CUSTOMERS', 'TOTAL.CUSTOMERS', 'RES.CUST.PCT', 'COM.CUST.PCT', 'IND.CUST.PCT',
'PC.REALGSP.STATE', 'PC.REALGSP.REL', 'PC.REALGSP.CHANGE', 'UTIL.REALGSP', 'TOTAL.REALGSP', 'UTIL.CONTRI',
'POPULATION', 'POPPCT_URBAN', 'POPPCT_UC', 'POPDEN_URBAN', 'POPDEN_UC', 'POPDEN_RURAL', 'AREAPCT_URBAN',
'AREAPCT_UC', 'PCT_LAND', 'PCT_WATER_TOT', 'PCT_WATER_INLAND']
# Define preprocessing: one-hot encode categorical variables and passthrough numerical variables
preproc = ColumnTransformer(
transformers = [
("categorical", OneHotEncoder(drop='first', handle_unknown='ignore'), cat_cols),
("numerical", "passthrough", num_cols)
],
remainder="drop"
)
# Selecting the relevant columns for training and testing
tests = cleaned[cat_cols + num_cols + ["DEMAND.LOSS.MW"]]
labels = ["one", "two", "three", "four", "five", "six", "seven", "eight"]
final = Pipeline([("preproc", preproc), ("tree",
RandomForestClassifier(n_estimators=125, max_depth=10, min_samples_split=2,
criterion='entropy', max_features=None))])
final.fit(tests.dropna(), pd.qcut(tests.dropna()["DEMAND.LOSS.MW"], q=10,
labels=labels, duplicates='drop'))
final.score(tests.dropna(), pd.qcut(tests.dropna()["DEMAND.LOSS.MW"], q=10,
labels=labels, duplicates='drop'))
# Final model's accuracy on the full dataset it was trained on (not held-out data), so this overstates out-of-sample performance.
0.9008674101610905
severity_dist = pd.DataFrame()
# Splits severity groups into two groups
# False if equal to or below 'four', else True
severity_dist["True"] = pd.qcut(tests.dropna()["DEMAND.LOSS.MW"],
q=10, labels=labels, duplicates='drop')
severity_dist["Predicted"] = severity_dist["True"] == final.predict(tests.dropna())
severity_dist["True"] = severity_dist["True"].apply(lambda val:
val in ["five", "six", "seven", "eight"])
severity_dist.groupby("True", observed=True).mean()
| True | Predicted |
|---|---|
| False | 0.94 |
| True | 0.84 |
obs_diff = (np.abs(severity_dist.groupby("True", observed=True).mean()["Predicted"][False]
- severity_dist.groupby("True", observed=True).mean()["Predicted"][True]))
test_diffs = []
for _ in range(10000):
severity_dist["Predicted"] = np.random.permutation(severity_dist["Predicted"])
test_diffs += [(np.abs(severity_dist.groupby("True", observed=True).mean()["Predicted"][False]
- severity_dist.groupby("True", observed=True).mean()["Predicted"][True]))]
test_diffs = np.array(test_diffs)
(test_diffs >= obs_diff).mean()
np.float64(0.0)
fairness = pd.DataFrame(columns=["CORRECT.PERCENT", "NUM.OUTAGES"])
# Looping through each unique U.S. state in the dataset
for x in np.unique(cleaned["U.S._STATE"]):
state_est = tests[tests["U.S._STATE"] == x].dropna().drop("DEMAND.LOSS.MW", axis=1)
# Getting the observed severity levels for the state using quantile binning
state_obs = pd.qcut(tests.dropna()["DEMAND.LOSS.MW"], q=10, labels=labels,
duplicates='drop')[tests["U.S._STATE"] == x]
# If the state has no complete rows, record NaN for both accuracy and outage count
if state_est.shape[0] == 0:
fairness.loc[x] = [np.nan, np.nan]
continue
# Calculate the model's prediction accuracy (CORRECT.PERCENT) for the state
tree_score = final.score(state_est, state_obs)
fairness.loc[x] = [tree_score, state_est.shape[0]]
# Top 10 states by number of outages
fairness.sort_values(by="NUM.OUTAGES", ascending=False).head(10)
| | CORRECT.PERCENT | NUM.OUTAGES |
|---|---|---|
| California | 0.85 | 158.0 |
| Texas | 0.86 | 59.0 |
| Michigan | 0.84 | 51.0 |
| Florida | 0.78 | 40.0 |
| Washington | 0.90 | 40.0 |
| New York | 0.94 | 34.0 |
| Maryland | 0.91 | 32.0 |
| Virginia | 0.90 | 29.0 |
| North Carolina | 0.92 | 24.0 |
| Delaware | 1.00 | 22.0 |
# Bottom 10 states by number of outages
fairness.sort_values(by="NUM.OUTAGES", ascending=True).head(10)
| | CORRECT.PERCENT | NUM.OUTAGES |
|---|---|---|
| North Dakota | 1.0 | 1.0 |
| Alabama | 1.0 | 2.0 |
| West Virginia | 1.0 | 2.0 |
| South Dakota | 1.0 | 2.0 |
| Mississippi | 1.0 | 2.0 |
| New Mexico | 1.0 | 3.0 |
| Nevada | 1.0 | 4.0 |
| Nebraska | 1.0 | 4.0 |
| Kansas | 1.0 | 4.0 |
| Iowa | 1.0 | 4.0 |
Due to the lack of data for these states, the state itself appears to become the strongest predictor of severity.
Unsurprisingly, this lack of data makes the model unfair to these states: when given data unlike its training set, the model is likely to make incorrect predictions for them.